

Use of Retrieval-Augmented Large Language Model Agent for Long-Form COVID-19 Fact-Checking

Huang, Jingyi, Yang, Yuyi, Ji, Mengmeng, Alba, Charles, Zhang, Sheng, An, Ruopeng

arXiv.org Artificial Intelligence

The COVID-19 infodemic calls for scalable fact-checking solutions that handle long-form misinformation with accuracy and reliability. This study presents SAFE (system for accurate fact extraction and evaluation), an agent system that combines large language models with retrieval-augmented generation (RAG) to improve automated fact-checking of long-form COVID-19 misinformation. SAFE includes two agents - one for claim extraction and another for claim verification using LOTR-RAG, which leverages a 130,000-document COVID-19 research corpus. An enhanced variant, SAFE (LOTR-RAG + SRAG), incorporates Self-RAG to refine retrieval via query rewriting. We evaluated both systems on 50 fake news articles (2-17 pages) containing 246 annotated claims (M = 4.922, SD = 3.186), labeled as true (14.1%), partly true (14.4%), false (27.0%), partly false (2.2%), and misleading (21.0%) by public health professionals. SAFE systems significantly outperformed baseline LLMs in all metrics (p < 0.001). For consistency (0-1 scale), SAFE (LOTR-RAG) scored 0.629, exceeding both SAFE (+SRAG) (0.577) and the baseline (0.279). In subjective evaluations (0-4 Likert scale), SAFE (LOTR-RAG) also achieved the highest average ratings in usefulness (3.640), clearness (3.800), and authenticity (3.526). Adding SRAG slightly reduced overall performance, except for a minor gain in clearness. SAFE demonstrates robust improvements in long-form COVID-19 fact-checking by addressing LLM limitations in consistency and explainability. The core LOTR-RAG design proved more effective than its SRAG-augmented variant, offering a strong foundation for scalable misinformation mitigation.
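The two-agent design the abstract describes (one agent extracts claims, a second verifies each claim against a retrieved corpus) can be sketched in miniature. Everything below is an illustrative assumption, not the authors' implementation: the function names, the keyword-overlap retriever standing in for LOTR-RAG, and the toy two-document corpus are all hypothetical, and a real system would use LLM calls and a vector store.

```python
# Illustrative sketch of a claim-extraction + retrieval-verification pipeline.
# NOT the SAFE implementation: the retriever, labels, and corpus are toy stand-ins.

def extract_claims(article: str) -> list[str]:
    """Agent 1 (stand-in): treat each sentence as a candidate claim."""
    return [s.strip() for s in article.split(".") if s.strip()]

def retrieve(claim: str, corpus: list[str], k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the claim."""
    claim_words = set(claim.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(claim_words & set(d.lower().split())))
    return ranked[:k]

def verify_claim(claim: str, corpus: list[str]) -> dict:
    """Agent 2 (stand-in): label a claim against its top retrieved document."""
    evidence = retrieve(claim, corpus)[0]
    overlap = len(set(claim.lower().split()) & set(evidence.lower().split()))
    label = "supported" if overlap >= 3 else "unverified"
    return {"claim": claim, "evidence": evidence, "label": label}

corpus = [
    "COVID-19 vaccines reduce the risk of severe illness and hospitalization",
    "Masks lower transmission of respiratory viruses in indoor settings",
]
article = "COVID-19 vaccines reduce severe illness. The moon landing was staged"
results = [verify_claim(c, corpus) for c in extract_claims(article)]
```

Running the sketch labels the first claim "supported" (it overlaps strongly with the vaccine document) and the second "unverified" (no corpus document supports it), mirroring, in a crude way, how retrieval grounds each extracted claim before a verdict is issued.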


Artificial intelligence: Experts propose guidelines for safe systems

BBC News

Prime Minister Rishi Sunak has appointed Ian Hogarth, a tech entrepreneur and AI investor, to lead an AI taskforce. Mr Hogarth told me this week he wanted "to better understand the risks associated with these frontier AI systems" and to hold the companies that develop them accountable.


Researchers reveal how they would deal with an AI uprising

Daily Mail - Science & tech

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, 'Matrix'-like, as some sort of human battery. And yet it is hard for me to look up from the evolutionary computer models I use to develop AI and to think about how the innocent virtual creatures on my screen might become the monsters of the future. One leading expert says he would 'appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive'. Why should a superintelligence keep us around? I would argue that I am a good person who might have even helped to bring about the superintelligence itself.